If f: V → W is a linear map and dim V = dim W = n, the following are equivalent:
f is onto (i.e. there is a map g:W→V such that f∘g=IdW)
f is one-to-one (i.e. there is a map g:W→V such that g∘f=IdV)
rank(f)=n
nullity(f)=0
f is an isomorphism
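As a quick numerical sanity check of these equivalences (a sketch using numpy; `matrix_rank` uses a numerical tolerance, and the concrete matrix here is just an illustration), full rank for a square matrix coincides with having a two-sided inverse:

```python
import numpy as np

# A 2x2 matrix of full rank n = 2: by the equivalences above it is
# invertible, one-to-one, onto, and has nullity 0.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
n = A.shape[0]

print(np.linalg.matrix_rank(A) == n)   # rank(f) = n
print(np.linalg.det(A) != 0)           # nonsingular
g = np.linalg.inv(A)                   # the two-sided inverse map g
print(np.allclose(A @ g, np.eye(n)))   # f∘g = Id_W, so f is onto
print(np.allclose(g @ A, np.eye(n)))   # g∘f = Id_V, so f is one-to-one
```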
Change of Basis
Changing the representation of a vector v from one basis to another.
The vector itself is the same, just the representations change. So,
the change of basis matrix for bases B and D is the matrix of the identity map id: V → V with respect to those bases:
$$\operatorname{Rep}_{B,D}(\mathrm{id}) = \begin{pmatrix} \vdots & & \vdots \\ \operatorname{Rep}_D(\beta_1) & \cdots & \operatorname{Rep}_D(\beta_n) \\ \vdots & & \vdots \end{pmatrix}$$
This has the effect that $\operatorname{Rep}_{B,D}(\mathrm{id})\operatorname{Rep}_B(v) = \operatorname{Rep}_D(v)$.
Conversely, if a matrix M satisfies $M\operatorname{Rep}_B(v) = \operatorname{Rep}_D(v)$ for every v,
then M is a change of basis matrix.
Example 11.1
Find the change of basis matrix for the following bases B, D of $P_2$:
$$B = \langle 1,\; 1+x,\; 1+x+x^2 \rangle \qquad D = \langle x^2-1,\; x,\; x^2+1 \rangle$$
Call the matrix M. Since this represents the identity map,
$$M(1) = \operatorname{Rep}_D(1) = \begin{pmatrix} -1/2 \\ 0 \\ 1/2 \end{pmatrix} \qquad M(1+x) = \operatorname{Rep}_D(1+x) = \begin{pmatrix} -1/2 \\ 1 \\ 1/2 \end{pmatrix} \qquad M(1+x+x^2) = \operatorname{Rep}_D(1+x+x^2) = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$$
So,
$$\operatorname{Rep}_{B,D}(\mathrm{id}) = \begin{pmatrix} -1/2 & -1/2 & 0 \\ 0 & 1 & 1 \\ 1/2 & 1/2 & 1 \end{pmatrix}$$
Now, suppose we want to take $v = 3 + 2x + 4x^2$, with
$$\operatorname{Rep}_B(v) = \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}$$
Then we can change basis to D by multiplying:
$$\operatorname{Rep}_D(v) = \begin{pmatrix} -1/2 & -1/2 & 0 \\ 0 & 1 & 1 \\ 1/2 & 1/2 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix} = \begin{pmatrix} 1/2 \\ 2 \\ 7/2 \end{pmatrix}$$
And indeed, $\tfrac{1}{2}(x^2-1) + 2(x) + \tfrac{7}{2}(x^2+1) = 3 + 2x + 4x^2 = v$
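A quick numpy sketch of Example 11.1, working entirely in coordinates (the polynomial bases are encoded as coefficient vectors against the powers $1, x, x^2$):

```python
import numpy as np

# Change of basis matrix from Example 11.1: columns are Rep_D of each B vector.
M = np.array([[-0.5, -0.5, 0.0],
              [ 0.0,  1.0, 1.0],
              [ 0.5,  0.5, 1.0]])

rep_B_v = np.array([1.0, -2.0, 4.0])    # v = 3 + 2x + 4x^2 in basis B
rep_D_v = M @ rep_B_v                    # the same v, now in basis D
print(rep_D_v)                           # [0.5 2.  3.5]

# Check by expanding back into coefficients of 1, x, x^2.
# Columns are the D vectors x^2-1, x, x^2+1 written against powers 1, x, x^2.
D_cols = np.array([[-1.0, 0.0, 1.0],
                   [ 0.0, 1.0, 0.0],
                   [ 1.0, 0.0, 1.0]])
print(D_cols @ rep_D_v)                  # [3. 2. 4.]  ->  3 + 2x + 4x^2
```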
For a matrix M: M changes basis ⟺ M is nonsingular
Proof
For the forward direction, we must prove M changes basis ⟹ M is nonsingular.
Since changing from basis B to D is invertible (its inverse is changing from D back to B), M must also be invertible, and is therefore nonsingular.
For the reverse direction, we must prove M is nonsingular ⟹ M is a change of basis matrix.
For this, since M is nonsingular, it is a product of elementary reduction matrices (see Ch. 10 for proof), so we only need to show that each elementary reduction matrix changes basis.
First, the matrix which multiplies row i by a nonzero constant k changes basis from $\langle\beta_1,\ldots,\beta_i,\ldots,\beta_n\rangle$ to $\langle\beta_1,\ldots,\tfrac{1}{k}\beta_i,\ldots,\beta_n\rangle$:
$$v = c_1\beta_1 + \cdots + c_i\beta_i + \cdots + c_n\beta_n = c_1\beta_1 + \cdots + kc_i\left(\tfrac{1}{k}\beta_i\right) + \cdots + c_n\beta_n = v$$
For the matrix that swaps two rows, the corresponding basis vectors also simply swap.
For the matrix that adds k times row i to row j (so the j-th coordinate becomes $kc_i + c_j$), the basis changes from $\langle\beta_1,\ldots,\beta_i,\ldots,\beta_j,\ldots,\beta_n\rangle$ to $\langle\beta_1,\ldots,\beta_i - k\beta_j,\ldots,\beta_j,\ldots,\beta_n\rangle$:
$$v = c_1\beta_1 + \cdots + c_i\beta_i + \cdots + c_j\beta_j + \cdots + c_n\beta_n = c_1\beta_1 + \cdots + c_i(\beta_i - k\beta_j) + \cdots + (kc_i + c_j)\beta_j + \cdots + c_n\beta_n = v$$
(notice that the $kc_i\beta_j$ terms cancel out)
So each elementary reduction matrix changes basis, and since a product of change of basis matrices is a change of basis matrix (a composition of identity maps), any invertible matrix must be a change of basis matrix.
Since a change of basis matrix represents the identity map with respect to two bases, we also have: M is nonsingular ⟺ M represents the identity map between two bases
Changing map representations
The next thing to consider is changing the bases of a map representation, from $\operatorname{Rep}_{B,D}(h)$ to $\operatorname{Rep}_{\hat B,\hat D}(h)$
There are two ways of taking a vector with respect to $\hat B$ and mapping it to a vector with respect to $\hat D$:
$$1.\quad \operatorname{Rep}_{\hat B}(v) \xrightarrow{\;\mathrm{id}\;} \operatorname{Rep}_B(v) \xrightarrow{\;H\;} \operatorname{Rep}_D(h(v)) \xrightarrow{\;\mathrm{id}\;} \operatorname{Rep}_{\hat D}(h(v)) \qquad 2.\quad \operatorname{Rep}_{\hat B}(v) \xrightarrow{\;\hat H\;} \operatorname{Rep}_{\hat D}(h(v))$$
From this, we can see:
$$\hat H = \operatorname{Rep}_{D,\hat D}(\mathrm{id}) \cdot H \cdot \operatorname{Rep}_{\hat B,B}(\mathrm{id})$$
In other words, to find the matrix representation of the map $h: V_{\text{wrt } \hat B} \to W_{\text{wrt } \hat D}$, multiply the matrix representation of $h: V_{\text{wrt } B} \to W_{\text{wrt } D}$ on the right by the change of basis matrix from $\hat B$ to B, and on the left by the change of basis matrix from D to $\hat D$.
Example 11.2
Consider the transformation $h: \mathbb{R}^3 \to \mathbb{R}^3$ represented by
$$\operatorname{Rep}_{\mathcal{E}_3,\mathcal{E}_3}(h) = H = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 1 & 2 & 3 \end{pmatrix}$$
This represents a transformation that takes the vector $\begin{pmatrix}1\\1\\1\end{pmatrix}$ to the vector $\begin{pmatrix}3\\2\\6\end{pmatrix}$
(note: both are with respect to the standard basis)
$$\begin{pmatrix}1\\1\\1\end{pmatrix} \xrightarrow{\;h\;} \begin{pmatrix}3\\2\\6\end{pmatrix}$$
Suppose we want to convert to the basis
$$\hat B = \hat D = \left\langle \begin{pmatrix}1\\0\\0\end{pmatrix}, \begin{pmatrix}1\\1\\0\end{pmatrix}, \begin{pmatrix}1\\1\\1\end{pmatrix} \right\rangle$$
Under this basis,
$$\begin{pmatrix}1\\1\\1\end{pmatrix}_{\mathcal{E}_3} = \begin{pmatrix}0\\0\\1\end{pmatrix}_{\hat B} \quad\text{and}\quad \begin{pmatrix}3\\2\\6\end{pmatrix}_{\mathcal{E}_3} = \begin{pmatrix}1\\-4\\6\end{pmatrix}_{\hat B}$$
so
$$\begin{pmatrix}0\\0\\1\end{pmatrix}_{\hat B} \xrightarrow{\;h\;} \begin{pmatrix}1\\-4\\6\end{pmatrix}_{\hat B}$$
This is the same map, but under a different representation. We want to find the matrix representation of this map, $\hat H$
First find $\operatorname{Rep}_{\hat B,\mathcal{E}_3}(\mathrm{id})$:
$$\begin{pmatrix}1\\0\\0\end{pmatrix}_{\hat B} = \begin{pmatrix}1\\0\\0\end{pmatrix}_{\mathcal{E}_3},\quad \begin{pmatrix}0\\1\\0\end{pmatrix}_{\hat B} = \begin{pmatrix}1\\1\\0\end{pmatrix}_{\mathcal{E}_3},\quad \begin{pmatrix}0\\0\\1\end{pmatrix}_{\hat B} = \begin{pmatrix}1\\1\\1\end{pmatrix}_{\mathcal{E}_3}$$
$$\implies \operatorname{Rep}_{\hat B,\mathcal{E}_3}(\mathrm{id}) = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$
Next we find $\operatorname{Rep}_{\mathcal{E}_3,\hat B}(\mathrm{id})$:
$$\begin{pmatrix}1\\0\\0\end{pmatrix}_{\mathcal{E}_3} = \begin{pmatrix}1\\0\\0\end{pmatrix}_{\hat B},\quad \begin{pmatrix}0\\1\\0\end{pmatrix}_{\mathcal{E}_3} = \begin{pmatrix}-1\\1\\0\end{pmatrix}_{\hat B},\quad \begin{pmatrix}0\\0\\1\end{pmatrix}_{\mathcal{E}_3} = \begin{pmatrix}0\\-1\\1\end{pmatrix}_{\hat B}$$
$$\implies \operatorname{Rep}_{\mathcal{E}_3,\hat B}(\mathrm{id}) = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}$$
You can also use the fact that $\operatorname{Rep}_{\mathcal{E}_3,\hat B}(\mathrm{id}) = \left(\operatorname{Rep}_{\hat B,\mathcal{E}_3}(\mathrm{id})\right)^{-1}$,
since the inverse of id is id.
Finally, we have
$$\hat H = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 1 & 2 & 3 \end{pmatrix} \cdot \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 \\ -1 & -2 & -4 \\ 1 & 3 & 6 \end{pmatrix}$$
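The computation in Example 11.2 can be checked with numpy (a sketch; the variable names `P` and `P_inv` for the two change of basis matrices are this snippet's own):

```python
import numpy as np

# H represents h with respect to the standard basis E3.
H = np.array([[1, 0, 2],
              [0, 1, 1],
              [1, 2, 3]])
# Rep_{B-hat, E3}(id): columns are the B-hat vectors (1,0,0),(1,1,0),(1,1,1).
P = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])
P_inv = np.linalg.inv(P)   # Rep_{E3, B-hat}(id), since the inverse of id is id

H_hat = P_inv @ H @ P
print(H_hat)
# [[ 1.  0.  1.]
#  [-1. -2. -4.]
#  [ 1.  3.  6.]]

# Sanity check: (1,1,1) in E3 is (0,0,1) in B-hat, and H-hat sends it to
# the B-hat coordinates of h(1,1,1) = (3,2,6), namely (1,-4,6).
print(H_hat @ np.array([0, 0, 1]))   # [ 1. -4.  6.]
```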
Same-sized matrices H and $\hat H$ are matrix equivalent if there exist nonsingular matrices P and Q such that $\hat H = PHQ$
In other words, matrix equivalent matrices represent the same map under different bases; matrix equivalence is an equivalence relation
Naturally, we want to find the canonical form for the equivalence classes.
Any m×n matrix with rank k is equivalent to the m×n matrix where every element is zero except the first k diagonal entries, which are one:
$$\begin{pmatrix}
1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 1 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0
\end{pmatrix}$$
This can be written in block partial-identity form
$$\begin{pmatrix} I & Z \\ Z & Z \end{pmatrix}$$
with I being the k×k identity matrix and Z being a zero matrix
Two proofs below:
Proof 1
Theorem: For a linear map h: V → W with rank k, there exist bases B and D such that
$$\operatorname{Rep}_{B,D}(h) = \begin{pmatrix} I_k & Z \\ Z & Z \end{pmatrix}$$
Let V have dimension n and W have dimension m. Then:
nullity(h) = n − k
⟹ we can find n−k vectors $\beta_{k+1},\ldots,\beta_n$ that form a basis for the null space N(h) ⊆ V
⟹ those vectors can be extended to a basis $B = \langle\beta_1,\ldots,\beta_k,\beta_{k+1},\ldots,\beta_n\rangle$ of V
⟹ $R(h) = \operatorname{span}\{h(\beta_1),\ldots,h(\beta_k),h(\beta_{k+1}),\ldots,h(\beta_n)\} \subseteq W$
⟹ $R(h) = \operatorname{span}\{h(\beta_1),\ldots,h(\beta_k)\}$ since $h(\beta_i) = 0$ for $i = k+1,\ldots,n$
⟹ $\delta_1 = h(\beta_1),\ldots,\delta_k = h(\beta_k)$ form a basis for R(h)
⟹ these can be extended to a basis $D = \langle\delta_1,\ldots,\delta_k,\delta_{k+1},\ldots,\delta_m\rangle$ of W
Thus, we have $h(\beta_i) = \delta_i$ for $i = 1,\ldots,k$ and $h(\beta_i) = 0$ for $i = k+1,\ldots,n$, so we have found B and D such that
$$\operatorname{Rep}_{B,D}(h) = \begin{pmatrix} I_k & Z \\ Z & Z \end{pmatrix}$$
as desired.
Theorem: any m×n matrix is matrix equivalent to an m×n matrix of the form $\begin{pmatrix} I & Z \\ Z & Z \end{pmatrix}$
Suppose M is an m×n matrix. For a linear map h:V→W and appropriate bases B1 of V and D1 of W, we have M=RepB1,D1(h).
From the previous theorem, we can find bases B of V and D of W such that $\operatorname{Rep}_{B,D}(h) = \begin{pmatrix} I_k & Z \\ Z & Z \end{pmatrix}$.
These represent the same linear map under different bases, so they are matrix equivalent.
Proof 2
First recall that elementary row operation matrices act on rows when multiplied on the left and on columns when multiplied on the right.
For a matrix M, we can apply row reduction to get a (not necessarily reduced) echelon form matrix R. Combine those reduction matrices (multiplying right to left) into a single matrix P. Thus, we have PM = R.
Then column-reduce R to get a matrix in block partial-identity form. Combine those operations (multiplying left to right) into a matrix Q. So, we have
$$PMQ = RQ = \begin{pmatrix} I & Z \\ Z & Z \end{pmatrix}$$
Example 11.3
Consider this rank 2 matrix:
$$M = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$
We will find P and Q such that PMQ is the canonical matrix of rank 2.
Row reducing gives
$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \xrightarrow[-7\rho_1+\rho_3]{-4\rho_1+\rho_2}\; \xrightarrow[-\frac{1}{3}\rho_2]{-2\rho_2+\rho_3} \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}$$
So we have (note right-to-left order)
$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1/3 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ -4 & 1 & 0 \\ -7 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 4/3 & -1/3 & 0 \\ 1 & -2 & 1 \end{pmatrix}$$
Next we perform column operations:
$$\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix} \xrightarrow{-2\,\text{col}_2+\text{col}_3}\; \xrightarrow[\text{col}_1+\text{col}_3]{-2\,\text{col}_1+\text{col}_2} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
So (note left-to-right order)
$$Q = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & -2 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & -2 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix}$$
In sum, M is matrix equivalent to
$$PMQ = \begin{pmatrix} 1 & 0 & 0 \\ 4/3 & -1/3 & 0 \\ 1 & -2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}\begin{pmatrix} 1 & -2 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
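The P and Q found in Example 11.3 can be verified with numpy (a sketch; floating-point entries, so we compare with a tolerance):

```python
import numpy as np

# Example 11.3: check that PMQ is the rank-2 canonical form.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
P = np.array([[1.0,  0.0,    0.0],   # accumulated row operations
              [4/3, -1/3,    0.0],
              [1.0, -2.0,    1.0]])
Q = np.array([[1.0, -2.0,  1.0],     # accumulated column operations
              [0.0,  1.0, -2.0],
              [0.0,  0.0,  1.0]])

print(np.linalg.matrix_rank(M))      # 2
print(P @ M @ Q)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 0.]]
```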
The effect of a block partial-identity matrix is easy to understand: for example,
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ y \\ 0 \end{pmatrix}$$
is a projection.
Matrix equivalence classes are characterized by rank:
two same-sized matrices are matrix equivalent iff they have the same rank
Proof: both are equivalent to the same block partial-identity matrix.
In particular, an n×n matrix is equivalent to In iff it is invertible.
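This characterization turns matrix equivalence into a one-line test (a sketch; the helper `matrix_equivalent` is this snippet's own name, and `matrix_rank` decides rank numerically):

```python
import numpy as np

def matrix_equivalent(A, B):
    """Same-sized matrices are matrix equivalent iff they have equal rank."""
    return A.shape == B.shape and \
           np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])      # rank 2
B = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])      # the rank-2 canonical form
C = np.eye(3)                  # rank 3: invertible, so equivalent to I_3

print(matrix_equivalent(A, B))  # True
print(matrix_equivalent(A, C))  # False
```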